Software rendering

In the context of computer graphics rendering, software rendering refers to a rendering process that is unaided by any specialized graphics hardware, such as a graphics card. The rendering takes place entirely on the CPU. The main advantage of rendering everything with the general-purpose CPU is that it is not restricted to the (limited) capabilities of graphics hardware.
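
As a minimal illustration of the idea (not drawn from any particular renderer), a software renderer is ultimately just CPU code writing color values into an in-memory framebuffer, which is afterwards displayed or saved. The C++ sketch below fills such a framebuffer with a simple gradient:

    #include <cstdint>
    #include <vector>

    // Minimal sketch: the "rendering" is ordinary CPU code writing pixel
    // colors into a block of memory (the framebuffer).
    int main() {
        const int width = 640, height = 480;
        std::vector<std::uint32_t> framebuffer(width * height);

        for (int y = 0; y < height; ++y) {
            for (int x = 0; x < width; ++x) {
                std::uint32_t r = 255 * x / (width - 1);
                std::uint32_t g = 255 * y / (height - 1);
                std::uint32_t b = 128;
                framebuffer[y * width + x] = (r << 16) | (g << 8) | b;  // 0x00RRGGBB
            }
        }
        // In a real program the framebuffer would now be copied to the screen
        // or written to an image file.
        return 0;
    }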

Software rendering can be split into two main categories: real-time rendering and offline rendering. Real-time rendering is used to render a scene interactively, as in 3D computer games, where each frame must generally be rendered within a few milliseconds. Offline rendering is used to create realistic images and movies, where each frame can take hours or days to complete.

Real-time software rendering

For real-time rendering the focus is on performance. The earliest texture-mapped real-time software renderers for PCs used many tricks to create the illusion of 3D (true 3D was limited to flat or Gouraud-shaded polygons, employed mainly in the flight-simulation genre). Wolfenstein 3D was restricted to a single floor and ceiling height, Doom introduced stairs and elevators, and Duke Nukem 3D added a limited form of looking up and down as well as sloped floors. The technology used in these games is now categorized as 2.5D. One of the first 'real' textured 3D games, allowing six degrees of freedom (three movement axes, three rotation axes), was Descent, which featured 3D models made entirely from polygons instead of sprites. Voxel-based graphics also gained popularity for fast and relatively detailed terrain rendering, but polygons later took over completely.

The 3D game revolution started with Quake, which featured a technically superior software renderer written by Michael Abrash and John Carmack (co-founder of id Software). With its popularity, Quake (which later received hardware-accelerated versions) and other 3D games of that time boosted sales of graphics cards, and more games started using hardware APIs such as DirectX and OpenGL. Although this signaled the end of software rendering as the primary rendering technology, many games released before 2000 still included a software renderer as a fallback. Most notably, Unreal and Unreal Tournament featured a software renderer capable of respectable quality and performance, making use of CPU instruction set extensions such as MMX. One of the last high-end games to use only a software renderer was Outcast, which featured advanced voxel technology alongside texture filtering and bump mapping of the kind found on graphics hardware.

In the video game console market, the evolution of 3D was more abrupt because the fixed platform specifications allowed vendors to design custom hardware without the constraints of backward compatibility. 16-bit consoles gained RISC accelerator cartridges in games such as Star Fox and Virtua Racing, which implemented software rendering through tailored instruction sets. The Atari Jaguar and 3DO were the first consoles targeted at 3D, but it was the Sony PlayStation, with its revolutionary textured-triangle throughput and powerful geometry coprocessor, that truly popularized 3D gaming. This was the first time 3D provided a compelling mainstream experience rather than a specialist gimmick (aside from Doom's 2.5D experience, which at the time depended on relatively high-end PC hardware).

Because PC systems are still sold with limited graphics cards (or none at all), software rendering remains necessary for some applications. Games aimed at children and casual gamers, who often use outdated systems or systems intended primarily for office applications, can need a software renderer as a fallback. For example, Toy Story 2 Action Game offers a choice between hardware and software rendering before the game starts, while others such as Half-Life default to software mode and can be switched to OpenGL or Direct3D in the options menu. Some 3D modeling software also features software renderers for visualization. Finally, the emulation and verification of graphics hardware also requires a software renderer; an example of the latter is the Direct3D reference rasterizer.

But even for high-end graphics, the 'art' of software rendering has not completely died out. While early graphics cards were much faster than software renderers, and originally had better quality and more features, they restricted developers to 'fixed-function' pixel processing. A need quickly arose to diversify the look of games. Software rendering has no such restrictions because an arbitrary program is executed. Graphics cards therefore reintroduced this programmability by executing small programs per vertex and per pixel/fragment, known as shaders. Shader languages, such as the High Level Shader Language (HLSL) for DirectX or the OpenGL Shading Language (GLSL), are C-like programming languages for shaders and begin to show some resemblance to arbitrary-function software rendering.
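
To make the parallel concrete, the following hypothetical C++ sketch (illustrative only, not based on any specific engine) shows the per-pixel stage of a software renderer as an ordinary function; this is exactly the kind of arbitrary per-fragment program that hardware shaders reintroduced:

    #include <cmath>
    #include <cstdint>
    #include <vector>

    // Hypothetical per-pixel "shader": in a software renderer any ordinary
    // function can be executed for every fragment, which is the flexibility
    // that hardware shaders later reintroduced.
    std::uint32_t shade_pixel(float u, float v) {
        // A simple procedural pattern standing in for texturing/lighting math.
        float value = 0.5f + 0.5f * std::sin(20.0f * u) * std::cos(20.0f * v);
        auto c = static_cast<std::uint32_t>(255.0f * value);
        return (c << 16) | (c << 8) | c;  // grayscale 0x00RRGGBB
    }

    void render(std::vector<std::uint32_t>& framebuffer, int width, int height) {
        for (int y = 0; y < height; ++y)
            for (int x = 0; x < width; ++x)
                framebuffer[y * width + x] =
                    shade_pixel(x / float(width), y / float(height));
    }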

Since the adoption of graphics hardware as the primary means of real-time rendering, CPU performance has continued to grow steadily. This has allowed new software rendering technologies to emerge. Although largely overshadowed by the performance of hardware rendering, some modern real-time software renderers manage to combine a broad feature set with reasonable performance (for a software renderer) by making use of specialized dynamic compilation and advanced instruction set extensions such as SSE. Although the dominance of hardware rendering over software rendering is now undisputed because of its unparalleled performance, features, and continuing innovation, some believe that CPUs and GPUs will eventually converge and that the line between software and hardware rendering will fade.[1]
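
As a rough illustration of the instruction set extensions mentioned above (assuming an x86 CPU with SSE2; the function below is hypothetical and not taken from any actual renderer), SIMD intrinsics let a software renderer process many color channels per instruction:

    #include <emmintrin.h>  // SSE2 intrinsics
    #include <cstddef>
    #include <cstdint>

    // Hypothetical example: halve the brightness of an 8-bit RGBA image,
    // sixteen color channels (four pixels) per SSE2 instruction.
    void darken_pixels_sse2(std::uint8_t* pixels, std::size_t count_bytes) {
        std::size_t i = 0;
        for (; i + 16 <= count_bytes; i += 16) {
            __m128i p = _mm_loadu_si128(reinterpret_cast<const __m128i*>(pixels + i));
            __m128i half = _mm_avg_epu8(p, _mm_setzero_si128());  // (p + 0 + 1) / 2 per byte
            _mm_storeu_si128(reinterpret_cast<__m128i*>(pixels + i), half);
        }
        for (; i < count_bytes; ++i)  // scalar tail for any leftover bytes
            pixels[i] = static_cast<std::uint8_t>((pixels[i] + 1) / 2);
    }

Renderers of the kind described above typically go further, generating such vectorized code at run time through dynamic compilation rather than writing the intrinsics by hand.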

Offline rendering

In contrast to real-time rendering, performance is only a secondary concern in offline rendering. It is used mainly in the film industry to create high-quality renderings of lifelike scenes. Many special effects in today's films are created entirely or partially with computer graphics; for example, the character of Gollum in Peter Jackson's The Lord of the Rings films is entirely computer-generated imagery (CGI). CGI is also increasingly used for animated films: most notably, Pixar has produced a series of films such as Toy Story and Finding Nemo, and the Blender Foundation produced the world's first open movie, Elephants Dream.

Because of the need for very high image quality and a wide diversity of effects, offline rendering requires a great deal of flexibility. Even though commercial real-time graphics hardware is becoming higher quality and more programmable by the day, most photorealistic CGI still requires software rendering. Pixar's RenderMan, for example, allows shaders of unlimited length and complexity, demanding a general-purpose processor. Techniques for high realism, such as ray tracing and global illumination, are also inherently ill-suited to hardware implementation and in most cases are realized purely in software.
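
As a sketch of why such techniques map naturally onto a general-purpose processor (illustrative code only, unrelated to RenderMan), the heart of a ray tracer is branch-heavy arithmetic such as the ray-sphere intersection test below:

    #include <cmath>
    #include <optional>

    struct Vec3 { float x, y, z; };

    float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
    Vec3 sub(const Vec3& a, const Vec3& b)  { return {a.x - b.x, a.y - b.y, a.z - b.z}; }

    // Returns the distance along the ray to the nearest hit with a sphere,
    // or nothing on a miss. Branch-heavy arithmetic like this runs naturally
    // on a general-purpose CPU.
    std::optional<float> intersect_sphere(const Vec3& origin, const Vec3& dir,
                                          const Vec3& center, float radius) {
        Vec3 oc = sub(origin, center);
        float b = dot(oc, dir);                    // assumes dir is normalized
        float c = dot(oc, oc) - radius * radius;
        float discriminant = b * b - c;
        if (discriminant < 0.0f) return std::nullopt;
        float t = -b - std::sqrt(discriminant);
        if (t > 0.0f) return t;
        return std::nullopt;
    }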

See also

References

External links